Dr. William L. Sanders is a senior research fellow with the University of North Carolina. He developed the Tennessee Value-Added Assessment System (TVAAS), also known as the Educational Value-Added Assessment System (EVAAS), a method for measuring a teacher's effect on student performance by tracking students' progress against their own prior performance over the course of their school careers, linked to the teachers to whose classes they were assigned. The system has been used in Tennessee since 1993 and has been adopted by a number of other school districts across the United States. Sanders' approach has been used to support the theory that teacher quality is central to educational achievement.[1]
The Pennsylvania and New Hampshire Departments of Education sponsor pilot programs, and the Iowa School Board Association sponsors his value-added work in that state. Battelle for Kids provides training in interpreting and using the SAS EVAAS services for participating districts in Ohio.
"Using mixed model equations, TVAAS uses the covariance matrix from this multivariate, longitudinal data set to evaluate the impact of the educational system on student progress in comparison to national norms, with data reports at the district, school, and teacher levels."[2] The model focuses on academic gains rather than raw achievement scores.
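The gain-score logic can be illustrated with a toy simulation. This is a simplified sketch with hypothetical data, not the TVAAS model itself: a real analysis fits a multivariate, longitudinal mixed model, whereas here teacher effects are estimated from mean student gains with empirical-Bayes shrinkage, which mimics the shrinkage that mixed-model estimates exhibit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 20 teachers, 25 students each.
# True teacher effects are drawn from a normal distribution.
n_teachers, n_students = 20, 25
true_effect = rng.normal(0.0, 2.0, n_teachers)

# Each student's score gain = overall mean gain + teacher effect + noise.
overall_gain = 10.0
gains = overall_gain + true_effect[:, None] + rng.normal(0.0, 8.0, (n_teachers, n_students))

# Raw estimate: each teacher's mean gain relative to the grand mean.
raw = gains.mean(axis=1) - gains.mean()

# Empirical-Bayes shrinkage toward zero (mimics mixed-model shrinkage):
# weight = var_teacher / (var_teacher + var_noise / n_students)
var_noise = gains.var(axis=1, ddof=1).mean()                     # within-teacher variance
var_teacher = max(raw.var(ddof=1) - var_noise / n_students, 0.0) # between-teacher variance
shrink = var_teacher / (var_teacher + var_noise / n_students)
shrunken = shrink * raw
```

Because each estimate is based on a limited number of students, the shrinkage factor pulls noisy raw estimates toward zero; this is one reason value-added estimates of individual teachers carry substantial standard errors, as the critiques below discuss.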
Dr. Ballou, in Lissitz (Ed.), 2005, "Value Added Models in Education: Theory and Applications," analyzed TVAAS and determined that value-added assessments of teachers are fallible estimates of teacher contribution to student learning, noting that the standard errors of value-added estimates are large. He concludes that value-added models are merely one useful tool and should serve as one of many assessments in a comprehensive system of evaluation.
Researchers from the RAND Corporation studied Dr. Sanders' method and determined that his approach does not satisfactorily account for bias, cautioning that non-educational effects may be mistakenly attributed to teachers, with no way of effectively determining the magnitude of the error.[3] Ballou (2002) and Kupermintz (2003) reached similar conclusions, finding that non-educational factors have a noticeable impact on teacher evaluations despite the model's efforts to account for them.[4]